
    Simplifying the Design of Knowledge-Based Algorithms Using Knowledge Consistency

    Processor knowledge is an important tool in the study of distributed computer systems. It has led to a better understanding of existing algorithms for such systems and to the development of new knowledge-based algorithms. Some of these algorithms use forms of knowledge (e.g., common knowledge) that cannot be achieved in certain systems. This paper considers alternative interpretations of knowledge under which these forms of knowledge can be achieved. It explores consistent knowledge interpretations and shows how they can be used to circumvent the known impossibility results in a number of cases. This may lead to greater applicability of knowledge-based algorithms.
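
    For context, the forms of knowledge mentioned here follow the standard epistemic-logic definitions in the Halpern–Moses tradition; the notation below is the conventional one, not necessarily the paper's. For a group G of processes, "everyone knows" and common knowledge are built from the individual knowledge operators K_i:

        E_G \varphi \;=\; \bigwedge_{i \in G} K_i \varphi
        C_G \varphi \;\leftrightarrow\; E_G(\varphi \wedge C_G \varphi)

    Common knowledge C_G \varphi is the greatest fixpoint of the second equivalence: everyone knows \varphi, everyone knows that everyone knows it, and so on. The classical impossibility results show it cannot be attained in systems without simultaneous events or with unreliable communication, which is what the consistent-interpretation approach aims to work around.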

    Persistency semantics of the Intel-x86 architecture

    Emerging non-volatile memory (NVM) technologies promise the durability of disks with the performance of RAM. To describe the persistency guarantees of NVM, several memory persistency models have been proposed in the literature. However, the persistency semantics of the ubiquitous x86 architecture remains unexplored to date. To close this gap, we develop the Px86 (‘persistent x86’) model, formalising the persistency semantics of Intel-x86 for the first time. We formulate Px86 both operationally and declaratively, and prove that the two characterisations are equivalent. To demonstrate the application of Px86, we develop two persistent libraries over Px86: a persistent transactional library, and a persistent variant of the Michael–Scott queue. Finally, we encode our declarative Px86 model in Alloy and use it to generate persistency litmus tests automatically.
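
    To make the setting concrete, here is a minimal C sketch (ours, not the paper's) of the kind of low-level persist pattern whose semantics Px86 formalises: on Intel-x86, a store becomes durable only once its cache line has been explicitly written back and the write-back has been ordered by a fence. The intrinsics are Intel's; the function name and the assumption of an NVM-backed, cache-line-resident location are ours.

        #include <immintrin.h>   /* _mm_clflushopt, _mm_sfence */
        #include <stdint.h>

        /* Illustrative only: store v to an NVM-backed location and force
         * the store toward persistent media before returning. */
        static void persist_store(uint64_t *p, uint64_t v)
        {
            *p = v;              /* plain store: may linger in the cache   */
            _mm_clflushopt(p);   /* initiate write-back of p's cache line  */
            _mm_sfence();        /* order the write-back before later ops  */
        }

    Reasoning about when such flushes and fences are actually required, and what is guaranteed to survive a crash, is exactly what a formal model like Px86 is for.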

    An Optimal Self-Stabilizing Firing Squad

    Consider a fully connected network where up to t processes may crash, and all processes start in an arbitrary memory state. The self-stabilizing firing squad problem consists of eventually guaranteeing a simultaneous response to an external input. This is modeled by requiring that the non-crashed processes "fire" simultaneously if some correct process received an external "GO" input, and that they fire only as a response to some process receiving such an input. This paper presents FireAlg, the first self-stabilizing firing squad algorithm. The FireAlg algorithm is optimal in two respects: (a) once the algorithm is in a safe state, it fires in response to a GO input as fast as any other algorithm does, and (b) starting from an arbitrary state, it converges to a safe state as fast as any other algorithm does.
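
    The paper's FireAlg is not reproduced here, but the non-stabilizing core of the problem can be sketched. In a synchronous round model with at most t crash failures and reliable channels, flooding the earliest round r0 at which any process received GO lets every correct process learn r0 within t+1 rounds (a longer relay chain would require a distinct crash per round), so all correct processes can fire together at round r0 + t + 1. A per-process C sketch, with the send primitive elided:

        #include <limits.h>
        #include <stdbool.h>

        /* Classic non-stabilizing baseline, not the paper's FireAlg. */
        static int t;              /* crash-fault bound, set by the system */
        static int r0 = INT_MAX;   /* earliest known round a GO arrived    */

        /* Called once per synchronous round with this process's GO input
         * and the r0 estimates relayed by the other processes. */
        bool fire_this_round(int round, bool got_go,
                             const int *relayed, int n)
        {
            if (got_go && round < r0)
                r0 = round;
            for (int i = 0; i < n; i++)        /* adopt earlier estimates */
                if (relayed[i] < r0)
                    r0 = relayed[i];
            /* (elided: broadcast r0 to all other processes) */
            return r0 != INT_MAX && round == r0 + t + 1;  /* fire together */
        }

    Self-stabilization is the hard part this sketch omits: started from an arbitrary memory state, r0 may initially hold garbage, and FireAlg must first converge to a safe state before its firing behavior is meaningful.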

    On the Performance of a Retransmission-Based Synchronizer

    Designing algorithms for distributed systems that provide a round abstraction is often simpler than designing for those that do not. However, distributed systems need to tolerate various kinds of failures. The concept of a synchronizer addresses both concerns: it constructs rounds and allows masking of transmission failures. One simple way of dealing with transmission failures is to retransmit a message until it is known to have been successfully received. We calculate the exact value of the average rate of a retransmission-based synchronizer in an environment with probabilistic message loss, within which the synchronizer shows nontrivial timing behavior. The theoretical results, based on Markov theory, are backed up with Monte Carlo simulations.
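
    To give a flavor of the quantity being computed, the toy Monte Carlo below (our model, not the paper's) estimates the average round length when a round completes only after all n messages are received and each (re)transmission is lost independently with probability p: every message then needs a Geometric(1-p) number of attempts, and a round lasts as long as its slowest message.

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const int    n      = 8;        /* messages per round (assumed) */
            const double p      = 0.1;      /* per-transmission loss prob.  */
            const long   trials = 1000000;
            double total_len = 0.0;

            srand(42);
            for (long k = 0; k < trials; k++) {
                int slowest = 0;
                for (int i = 0; i < n; i++) {
                    int attempts = 1;        /* retransmit until received   */
                    while ((double)rand() / RAND_MAX < p)
                        attempts++;
                    if (attempts > slowest)
                        slowest = attempts;  /* round waits for the slowest */
                }
                total_len += slowest;
            }
            printf("avg round length: %f timeouts -> rate %f rounds/timeout\n",
                   total_len / trials, trials / total_len);
            return 0;
        }

    The expected maximum of n such geometric variables also admits an exact expression via inclusion-exclusion, which is the kind of exact Markov-style calculation the paper carries out for its more detailed synchronizer model.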

    ‘Maintaining balance and harmony’: Javanese perceptions of health and cardiovascular disease

    Community intervention programmes to reduce cardiovascular disease (CVD) risk factors within urban communities in developing countries are rare. One possible explanation is the difficulty of designing an intervention that corresponds to the local context and culture.

    Cost-effectiveness of external cephalic version for term breech presentation

    Background: External cephalic version (ECV) is recommended by the American College of Obstetricians and Gynecologists to convert a breech fetus to vertex position and reduce the need for cesarean delivery. The goal of this study was to determine the incremental cost-effectiveness ratio, from society's perspective, of ECV compared to scheduled cesarean for term breech presentation.
    Methods: A computer-based decision model (TreeAge Pro 2008, TreeAge Software, Inc.) was developed for a hypothetical base-case parturient presenting with a term singleton breech fetus with no contraindications for vaginal delivery. The model incorporated actual hospital costs (e.g., $8,023 for cesarean and $5,581 for vaginal delivery), utilities to quantify health-related quality of life, and probabilities, based on analysis of the published literature, of a successful ECV trial, spontaneous reversion, mode of delivery, and need for unanticipated emergency cesarean delivery. The primary endpoint was the incremental cost-effectiveness ratio in dollars per quality-adjusted life-year gained. A threshold of $50,000 per quality-adjusted life-year (QALY) was used to determine cost-effectiveness.
    Results: The incremental cost-effectiveness of ECV, assuming a baseline 58% success rate, equaled $7,900/QALY. If the estimated probability of successful ECV is less than 32%, then ECV costs more to society and yields poorer QALYs for the patient. When the probability of successful ECV was between 32% and 63%, ECV cost more than cesarean delivery but with greater associated QALYs, such that the cost-effectiveness ratio was less than $50,000/QALY. If the probability of successful ECV was greater than 63%, the computer modeling indicated that a trial of ECV is less costly and yields better QALYs than a scheduled cesarean. The cost-effectiveness of a trial of ECV is most sensitive to its probability of success, and not to the probabilities of a cesarean after ECV, spontaneous reversion to breech, a successful second ECV trial, or an adverse outcome from emergency cesarean.
    Conclusions: From society's perspective, a trial of ECV is cost-effective compared to a scheduled cesarean for breech presentation provided the probability of successful ECV is greater than 32%. Improved algorithms are needed to more precisely estimate the likelihood that a patient will have a successful ECV.
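
    For readers outside health economics, the primary endpoint is the standard incremental cost-effectiveness ratio; using the abstract's threshold, a strategy is deemed cost-effective when the ratio falls below $50,000/QALY. The symbols C (expected cost) and Q (expected QALYs) below are the conventional ones, not notation from the paper:

        \mathrm{ICER} \;=\; \frac{C_{\text{ECV}} - C_{\text{cesarean}}}{Q_{\text{ECV}} - Q_{\text{cesarean}}} \;<\; \$50{,}000/\text{QALY}

    This is why the 32% and 63% cut-offs appear: below 32%, ECV is dominated (costs more and yields fewer QALYs); between 32% and 63%, ECV costs more but the ratio stays under the threshold; above 63%, ECV dominates the scheduled cesarean (costs less and yields more QALYs).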

    Pets as Sentinels of Human Exposure to Neurotoxic Metals

    The idea that animals may be used as sentinels of environmental hazards threatening humans, with the associated public health implications, is not a new one. Nowadays pets are used as bioindicators of the effects of environmental contaminants on human populations. This is of paramount importance due to the large increase in the worldwide distribution of synthetic chemicals, particularly in the built environment. Companion animals share their habitat with humans, are simultaneously exposed to the same contaminants, and suffer the same disease spectrum as their owners. Moreover, their shorter latency periods (due to briefer lifespans) enable them to act as early warning systems, allowing timely public health interventions. The rise in ethical constraints on the use of animals and, consequently, on the sampling they can be subjected to has led to the preferential use of noninvasive matrices, in this case hair. This chapter focuses on three non-essential metals: mercury, lead, and cadmium, chosen for their ubiquitous presence in the built environment and their ability to affect the mammalian nervous system. There are fairly few studies reporting the concentrations of these metals in pets’ hair, particularly for cats. These studies are characterized, and the metal concentrations corresponding to different parameters (e.g., age, sex, diet, rearing) are described, in order to give the reader a general view of the use of this noninvasive matrix in studies conducted since the last two decades of the twentieth century.